A Variational Inequality Perspective on Generative Adversarial Nets
Authors: Gauthier Gidel, Hugo Berard, Gaëtan Vignoud, Pascal Vincent, Simon Lacoste-Julien
Abstract
Stability has been a recurrent issue in training generative adversarial networks (GANs). One common way to tackle this issue has been to propose new formulations of the GAN objective. Yet, surprisingly few studies have looked at optimization methods specifically designed for this adversarial training. In this work, we review the “variational inequality” framework, which contains most formulations of the GAN objective introduced so far. Tapping into the mathematical programming literature, we counter some common misconceptions about the difficulties of saddle point optimization, propose extending standard methods designed for variational inequalities, such as a stochastic version of the extragradient method, to GAN training, and empirically investigate their behavior on GANs.
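To make the extragradient update concrete: writing ω for the stacked generator and discriminator parameters and F(ω) for the corresponding game vector field, a variational inequality solution ω* satisfies F(ω*)ᵀ(ω − ω*) ≥ 0 for all feasible ω, and extragradient first extrapolates, ω_{t+1/2} = ω_t − η F(ω_t), then updates the original iterate using the gradient at the extrapolated point, ω_{t+1} = ω_t − η F(ω_{t+1/2}). The short Python sketch below illustrates this rule on the bilinear toy game min_x max_y xy; it is my own illustration, not code from the paper, and the step size and iteration count are arbitrary choices for the example.

    import numpy as np

    # Toy bilinear game min_x max_y x*y, whose unique saddle point is (0, 0).
    # Its vector field F(x, y) = (y, -x) is a pure rotation, so simultaneous
    # gradient descent/ascent spirals away from the solution; the extragradient
    # extrapolation step corrects for that rotation.

    def F(w):
        x, y = w
        return np.array([y, -x])  # (d(xy)/dx, -d(xy)/dy)

    eta = 0.3                     # step size (arbitrary for this illustration)
    w = np.array([1.0, 1.0])
    for _ in range(200):
        w_half = w - eta * F(w)   # extrapolation ("lookahead") step
        w = w - eta * F(w_half)   # update the original iterate with the lookahead gradient

    print(w)  # approaches (0, 0); plain descent/ascent with the same eta diverges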
Similar resources
On the Ability of Neural Nets to Express Distributions
Deep neural nets have caused a revolution in many classification tasks. A related ongoing revolution—also theoretically not understood—concerns their ability to serve as generative models for complicated types of data such as images and texts. These models are trained using ideas like variational autoencoders and Generative Adversarial Networks. We take a first cut at explaining the expressivit...
Text Generation Based on Generative Adversarial Nets with Latent Variable
In this paper, we propose a model that uses a generative adversarial net (GAN) to generate realistic text. Instead of using a standard GAN, we combine a variational autoencoder (VAE) with a generative adversarial net. The use of high-level latent random variables is helpful for learning the data distribution and solving the problem that a generative adversarial net always emits similar data. We propose the VGA...
Conditional Generative Adversarial Nets
Generative Adversarial Nets [8] were recently introduced as a novel way to train generative models. In this work we introduce the conditional version of generative adversarial nets, which can be constructed by simply feeding the data, y, we wish to condition on to both the generator and discriminator. We show that this model can generate MNIST digits conditioned on class labels. We also illustr...
Adversarial Variational Bayes: Unifying Variational Autoencoders and Generative Adversarial Networks
Variational Autoencoders (VAEs) are expressive latent variable models that can be used to learn complex probability distributions from training data. However, the quality of the resulting model crucially relies on the expressiveness of the inference model. We introduce Adversarial Variational Bayes (AVB), a technique for training Variational Autoencoders with arbitrarily expressive inference mo...
Supplementary Material for Adversarial Variational Bayes: Unifying Variational Autoencoders and Generative Adversarial Networks
In the main text we derived Adversarial Variational Bayes (AVB) and demonstrated its usefulness both for black-box Variational Inference and for learning latent variable models. This document contains proofs that were omitted in the main text as well as some further details about the experiments and additional results.
Journal: CoRR
Volume: abs/1802.10551
Publication year: 2018